754 research outputs found

    Scale-Adaptive Neural Dense Features: Learning via Hierarchical Context Aggregation

    How do computers and intelligent agents view the world around them? Feature extraction and representation constitutes one of the basic building blocks towards answering this question. Traditionally, this has been done with carefully engineered hand-crafted techniques such as HOG, SIFT or ORB. However, there is no "one size fits all" approach that satisfies all requirements. In recent years, the rising popularity of deep learning has resulted in a myriad of end-to-end solutions to many computer vision problems. These approaches, while successful, tend to lack scalability and cannot easily exploit information learned by other systems. Instead, we propose SAND features, a dedicated deep learning solution to feature extraction capable of providing hierarchical context information. This is achieved by employing sparse relative labels indicating relationships of similarity/dissimilarity between image locations. The nature of these labels results in an almost infinite set of dissimilar examples to choose from. We demonstrate how the selection of negative examples during training can be used to modify the feature space and vary its properties. To demonstrate the generality of this approach, we apply the proposed features to a multitude of tasks, each requiring different properties. This includes disparity estimation, semantic segmentation, self-localisation and SLAM. In all cases, we show how incorporating SAND features yields results better than or comparable to the baseline, whilst requiring little to no additional training. Code can be found at: https://github.com/jspenmar/SAND_features (Comment: CVPR 2019)
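
    The training signal described here is a metric-learning objective over dense feature maps. The sketch below is a minimal illustration of that idea, not the authors' implementation: it samples random pixels as negatives (the "almost infinite" dissimilar set mentioned above) and applies a hinge loss. The tensor shapes, sampling scheme and margin value are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a pixel-wise contrastive loss
# driven by sparse similar/dissimilar labels, as the abstract describes.
import torch
import torch.nn.functional as F

def sand_style_loss(feats_a, feats_b, pos_uv_a, pos_uv_b, n_neg=64, margin=0.5):
    """feats_*: (C, H, W) dense feature maps; pos_uv_*: (N, 2) matching pixel coords."""
    C, H, W = feats_a.shape
    anchors = feats_a[:, pos_uv_a[:, 1], pos_uv_a[:, 0]].t()     # (N, C)
    positives = feats_b[:, pos_uv_b[:, 1], pos_uv_b[:, 0]].t()   # (N, C)

    # Any pixel other than the labelled correspondence is a valid negative,
    # which is the "almost infinite" dissimilar set the abstract mentions.
    neg_u = torch.randint(0, W, (anchors.shape[0], n_neg))
    neg_v = torch.randint(0, H, (anchors.shape[0], n_neg))
    negatives = feats_b[:, neg_v, neg_u].permute(1, 2, 0)        # (N, n_neg, C)

    pos_dist = F.pairwise_distance(anchors, positives)           # (N,)
    neg_dist = (anchors.unsqueeze(1) - negatives).norm(dim=-1)   # (N, n_neg)

    # Hinge: pull matches together, push sampled negatives beyond the margin.
    return (pos_dist.unsqueeze(1) - neg_dist + margin).clamp(min=0).mean()
```

    Biasing where the negatives are drawn from (nearby pixels versus distant ones) is one way the choice of negative examples can reshape the learned feature space, as the abstract notes.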

    DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning

    In current monocular depth research, the dominant approach is to employ unsupervised training on large datasets, driven by warped photometric consistency. Such approaches lack robustness and are unable to generalize to challenging domains such as nighttime scenes or adverse weather conditions, where assumptions about photometric consistency break down. We propose DeFeat-Net (Depth & Feature network), an approach to simultaneously learn a cross-domain dense feature representation alongside a robust depth-estimation framework based on warped feature consistency. The resulting feature representation is learned in an unsupervised manner with no explicit ground-truth correspondences required. We show that within a single domain, our technique is comparable to both the current state of the art in monocular depth estimation and supervised feature representation learning. However, by simultaneously learning features, depth and motion, our technique is able to generalize to challenging domains, allowing DeFeat-Net to outperform the current state of the art with around a 10% reduction in all error measures on more challenging sequences such as nighttime driving.
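
    The core mechanism, warped feature consistency, replaces the photometric term of standard self-supervised depth pipelines with a comparison in feature space. Below is a rough sketch of that mechanism under stated assumptions (known intrinsics K, predicted relative pose T, single-scale features); it is not the published implementation.

```python
# Sketch of warped feature consistency: project target pixels into a source
# view using a predicted depth map and relative pose, sample source features
# there, and penalise the difference. K and T are assumed given.
import torch
import torch.nn.functional as F

def feature_consistency_loss(feat_tgt, feat_src, depth, K, T):
    """feat_*: (1, C, H, W); depth: (1, 1, H, W); K: (3, 3); T: (4, 4)."""
    _, _, H, W = feat_tgt.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], 0).float().reshape(3, -1)

    # Back-project to 3D with the predicted depth, then move to the source frame.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)
    cam_h = torch.cat([cam, torch.ones(1, cam.shape[1])], 0)
    proj = K @ (T @ cam_h)[:3]
    uv = proj[:2] / proj[2].clamp(min=1e-6)

    # Normalise to [-1, 1] and sample the source features at the warped locations.
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1, uv[1] / (H - 1) * 2 - 1], -1)
    warped = F.grid_sample(feat_src, grid.reshape(1, H, W, 2), align_corners=True)

    # Comparing features rather than raw pixels is what lends robustness to
    # appearance changes such as nighttime scenes.
    return (feat_tgt - warped).abs().mean()
```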

    Same Features, Different Day: Weakly Supervised Feature Learning for Seasonal Invariance

    "Like night and day" is a commonly used expression to imply that two things are completely different. Unfortunately, this tends to be the case for current visual feature representations of the same scene across varying seasons or times of day. The aim of this paper is to provide a dense feature representation that can be used to perform localization, sparse matching or image retrieval, regardless of the current seasonal or temporal appearance. Recently, there have been several proposed methodologies for deep learning dense feature representations. These methods make use of ground truth pixel-wise correspondences between pairs of images and focus on the spatial properties of the features. As such, they don't address temporal or seasonal variation. Furthermore, obtaining the required pixel-wise correspondence data to train in cross-seasonal environments is highly complex in most scenarios. We propose Deja-Vu, a weakly supervised approach to learning season invariant features that does not require pixel-wise ground truth data. The proposed system only requires coarse labels indicating if two images correspond to the same location or not. From these labels, the network is trained to produce "similar" dense feature maps for corresponding locations despite environmental changes. Code will be made available at: https://github.com/jspenmar/DejaVu_Feature

    Overview of lunar detection of ultra-high energy particles and new plans for the SKA

    The lunar technique is a method for maximising the collection area for ultra-high-energy (UHE) cosmic ray and neutrino searches. The method uses either ground-based radio telescopes or lunar orbiters to search for Askaryan emission from particles cascading near the lunar surface. While experiments using the technique have made important advances in the detection of nanosecond-scale pulses, only at the very highest energies has the lunar technique achieved competitive limits. This is expected to change with the advent of the Square Kilometre Array (SKA), the low-frequency component of which (SKA-low) is predicted to be able to detect an unprecedented number of UHE cosmic rays. In this contribution, the status of lunar particle detection is reviewed, with particular attention paid to outstanding theoretical questions, and the technical challenges of using a giant radio array to search for nanosecond pulses. The activities of SKA’s High Energy Cosmic Particles Focus Group are described, as is a roadmap by which this group plans to incorporate this detection mode into SKA-low observations. Estimates for the sensitivity of SKA-low phases 1 and 2 to UHE particles are given, along with the achievable science goals of each stage. Prospects for near-future observations with other instruments are also described.
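
    At its core, the search problem described here is impulsive-pulse detection in a sampled voltage stream. The toy sketch below illustrates only that generic step, thresholding against the noise RMS; the sample rate, pulse amplitude and 6-sigma trigger level are illustrative assumptions, not SKA parameters.

```python
# Toy nanosecond-pulse search: threshold a noisy voltage time series.
import numpy as np

rng = np.random.default_rng(0)
fs = 1e9                               # 1 GS/s -> 1 ns resolution (assumed)
signal = rng.normal(0.0, 1.0, 1_000_000)
signal[500_000] += 8.0                 # injected impulsive "Askaryan-like" pulse

threshold = 6.0 * np.std(signal)       # simple 6-sigma trigger (assumed level)
candidates = np.flatnonzero(np.abs(signal) > threshold)
print(f"candidate pulse samples at t = {candidates / fs * 1e9} ns")
```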

    Effect of Discontinuation of Fluoride Intake from Water and Toothpaste on Urinary Excretion in Young Children

    As there is no homeostatic mechanism for maintaining circulating fluoride (F) in the human body, the concentration may decrease when intake is interrupted and increase again when it is re-started. The present study prospectively evaluated this process in children exposed to F intake from water and toothpaste, using F in urine as a biomarker. Eleven children from Ibiá, Brazil (with a sub-optimally fluoridated water supply), aged two to four years, who regularly used fluoridated toothpaste (1,100 ppm F) took part in the study. Twenty-four-hour urine was collected at baseline (Day 0, F exposure from water and toothpaste), after the interruption of fluoride intake from water and dentifrice (Days 1 to 28) (F interruption), and after fluoride intake from these sources had been re-established (Days 29 to 34) (F re-exposure). Urinary volume was measured, fluoride concentration was determined, and the amount of fluoride excreted was calculated and expressed in mg F/day. Urinary fluoride excretion (UFE) during the periods of fluoride exposure, interruption and re-exposure was analyzed using the Wilcoxon test. Mean UFE was 0.25 mg F/day (SD: 0.15) at baseline, dropped to a mean of 0.14 mg F/day during F interruption (SD: 0.07; range: 0.11 to 0.17 mg F/day), and rose to 0.21 (SD: 0.09) and 0.19 (SD: 0.08) mg F/day following F re-exposure. The difference between baseline UFE and the period of F interruption was statistically significant (p < 0.05), while the difference between baseline and the period of F re-exposure was non-significant (p > 0.05). The findings suggest that circulating F in the body of young children decreases rapidly within the first 24 hours after discontinuation of F intake from water and toothpaste, and increases again just as quickly upon re-exposure.
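
    The comparison across periods is a paired nonparametric test. The snippet below sketches that analysis with the Wilcoxon signed-rank test from SciPy; the per-child values are randomly generated placeholders that merely mimic the reported means and SDs, not the study's data.

```python
# Sketch of the paired comparison used in the study: Wilcoxon signed-rank
# test on per-child urinary fluoride excretion (UFE). Placeholder data only.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_children = 11
baseline_ufe = rng.normal(0.25, 0.15, n_children).clip(min=0.01)      # mg F/day
interruption_ufe = rng.normal(0.14, 0.07, n_children).clip(min=0.01)  # mg F/day

stat, p = wilcoxon(baseline_ufe, interruption_ufe)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")  # the paper reports p < 0.05 here
```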

    The Monocular Depth Estimation Challenge

    This paper summarizes the results of the first Monocular Depth Estimation Challenge (MDEC), organized at WACV 2023. This challenge evaluated the progress of self-supervised monocular depth estimation on the challenging SYNS-Patches dataset. The challenge was organized on CodaLab and received submissions from 4 valid teams. Participants were provided a devkit containing updated reference implementations for 16 state-of-the-art algorithms and 4 novel techniques. The threshold for acceptance for novel techniques was to outperform every one of the 16 SotA baselines. All participants outperformed the baseline in traditional metrics such as MAE or AbsRel. However, pointcloud reconstruction metrics were challenging to improve upon. We found predictions were characterized by interpolation artefacts at object boundaries and errors in relative object positioning. We hope this challenge is a valuable contribution to the community and encourage authors to participate in future editions. (Comment: WACV Workshops 2023)
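
    For reference, the two image-based metrics named above have standard definitions in the depth literature; the sketch below states them explicitly (my formulation, not the challenge devkit).

```python
# Standard image-based depth metrics: mean absolute error and absolute
# relative error, averaged over pixels with valid ground truth.
import numpy as np

def mae(pred, gt):
    """Mean absolute error over valid ground-truth pixels."""
    valid = gt > 0
    return np.abs(pred[valid] - gt[valid]).mean()

def abs_rel(pred, gt):
    """Absolute relative error: |pred - gt| / gt over valid pixels."""
    valid = gt > 0
    return (np.abs(pred[valid] - gt[valid]) / gt[valid]).mean()
```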

    The impact of COVID-19 pandemic on inequity in routine childhood vaccination coverage : a systematic review

    Routine childhood vaccination coverage rates fell in many countries during the COVID-19 pandemic, but the impact of inequity on coverage is unknown. We synthesised evidence on inequities in routine childhood vaccination coverage (PROSPERO CRD42021257431). Studies reporting empirical data on routine vaccination coverage in children 0-18 years old during the COVID-19 pandemic by equity stratifiers were systematically reviewed. Nine electronic databases were searched between 1 January 2020 and 18 January 2022. The risk of bias was assessed using the Newcastle-Ottawa Quality Assessment Tool for Cohort Studies. Overall, 91 of 1453 studies were selected for full-paper review, and 13 met the inclusion criteria. The narrative synthesis found moderate evidence that inequity reduced the vaccination coverage of children during COVID-19 lockdowns, and moderately strong evidence that inequity increased compared with pre-pandemic months (before March 2020). Two studies reported higher rates of inequity among children aged less than one year, and one showed higher inequity rates in middle- compared with high-income countries. Evidence from a limited number of studies shows the effect of the pandemic on vaccine coverage inequity. Research from more countries is required to assess the global effect on inequity in coverage.

    Feedback-informed treatment versus usual psychological treatment for depression and anxiety : a multisite, open-label, cluster randomised controlled trial

    Background: Previous research suggests that the use of outcome feedback technology can enable psychological therapists to identify and resolve obstacles to clinical improvement. We aimed to assess the effectiveness of an outcome feedback quality-assurance system applied in stepped care psychological services.

    Methods: This multisite, open-label, cluster randomised controlled trial was done at eight National Health Service (NHS) Trusts in England, involving therapists who were qualified to deliver evidence-based low-intensity or high-intensity psychological interventions. Adult patients (18 years or older) who accessed individual therapy with participating therapists were eligible for inclusion, except patients who accessed group therapies and those who attended fewer than two individual therapy sessions. Therapists were randomly assigned (1:1) to an outcome feedback intervention group or a treatment-as-usual control group by use of a computer-generated randomisation algorithm. The allocation of patients to therapists was quasi-random, whereby patients on waiting lists were allocated sequentially on the basis of therapist availability. All patients received low-intensity (fewer than eight sessions) or high-intensity (up to 20 sessions) psychological therapies for the duration of the 1-year study period. An automated computer algorithm alerted therapists in the outcome feedback group to patients who were not on track, and primed them to review these patients in clinical supervision. The primary outcome was symptom severity on validated depression (Patient Health Questionnaire-9 [PHQ-9]) and anxiety (Generalised Anxiety Disorder-7 [GAD-7]) measures after treatment of varying durations, which was compared between groups with multilevel modelling, controlling for cluster (therapist) effects. We used an intention-to-treat approach. This trial was prospectively registered with ISRCTN, number ISRCTN12459454.

    Findings: In total, 79 therapists were recruited to the study between Jan 8, 2016, and July 15, 2016, but two did not participate. Of the 77 participating therapists, 39 (51%) were randomly assigned to the outcome feedback group and 38 (49%) to the control group. Overall, 2233 patients were included in the trial (1176 [53%] were treated by therapists in the outcome feedback group, and 1057 [47%] by therapists in the control group). Patients classified as not on track had less severe symptoms after treatment if they were allocated to the outcome feedback group rather than the control group (PHQ-9 d=0·23, B=–1·03 [95% CI −1·84 to −0·23], p=0·012; GAD-7 d=0·19, B=–0·85 [–1·56 to −0·14], p=0·019).

    Interpretation: Supplementing psychological therapy with low-cost feedback technology can reduce symptom severity in patients at risk of poor response to treatment. This evidence supports the implementation of outcome feedback in stepped care psychological services.

    Funding: English NHS and Department of Health Sciences, University of York, York, UK.
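
    The primary analysis described above is a multilevel (mixed-effects) model with a random intercept per therapist cluster. The sketch below reproduces that structure on synthetic data using statsmodels; the column names, effect sizes and cluster count are invented for illustration only.

```python
# Sketch of a multilevel model of post-treatment PHQ-9 with a random
# intercept per therapist, on synthetic (not trial) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "therapist_id": rng.integers(0, 20, n),            # 20 hypothetical clusters
    "group": rng.choice(["feedback", "control"], n),
})
# Synthetic outcome: small group effect plus therapist-level variation.
therapist_effect = rng.normal(0.0, 1.0, 20)
df["phq9_post"] = (10.0
                   - 1.0 * (df["group"] == "feedback")
                   + therapist_effect[df["therapist_id"]]
                   + rng.normal(0.0, 4.0, n))

model = smf.mixedlm("phq9_post ~ group", df, groups=df["therapist_id"])
print(model.fit().summary())   # the group coefficient plays the role of B above
```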

    Assessing Syndromic Surveillance of Cardiovascular Outcomes from Emergency Department Chief Complaint Data in New York City

    Prospective syndromic surveillance of emergency department visits has been used for near-real-time tracking of communicable diseases to detect outbreaks or other unexpected disease clusters. The utility of syndromic surveillance for tracking cardiovascular events, which may be influenced by environmental factors and influenza, has not been evaluated. We developed and evaluated a method for tracking cardiovascular events using emergency department free-text chief complaints.

    There were three phases to our analysis. First, we applied text processing algorithms to chief complaint data reported by 11 New York City emergency departments for which ICD-9 discharge diagnosis codes were available, and evaluated them in terms of sensitivity, specificity, and positive predictive value. Second, the same algorithms were applied to data reported by a larger sample of 50 New York City emergency departments for which discharge diagnoses were unavailable. From this more complete data, we evaluated the consistency of temporal variation between cardiovascular syndromic events and hospitalizations from 76 New York City hospitals. Finally, we examined associations between particulate matter ≤2.5 µm (PM2.5), syndromic events, and hospitalizations. Sensitivity and positive predictive value were low for syndromic events, while specificity was high. Utilizing the larger sample of emergency departments, a strong day-of-week pattern and weak seasonal trend were observed for syndromic events and hospitalizations. These time series were highly correlated after removing the day-of-week, holiday, and seasonal trends. The estimated percent excess risks in the cold season (October to March) were 1.9% (95% confidence interval (CI): 0.6, 3.2), 2.1% (95% CI: 0.9, 3.3), and 1.8% (95% CI: 0.5, 3.0) per same-day 10 µg/m³ increase in PM2.5 for cardiac-only syndromic data, cardiovascular syndromic data, and hospitalizations, respectively.

    Near-real-time emergency department chief complaint data may be useful for timely surveillance of cardiovascular morbidity related to ambient air pollution and other environmental events.
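
    Phase one amounts to a text-classification rule scored against ICD-9 labels. The sketch below shows the shape of that evaluation with a hypothetical keyword rule and toy records; the terms, column names and data are my assumptions, not the study's algorithm.

```python
# Toy evaluation of a keyword-based cardiovascular chief-complaint flag
# against ICD-9-derived labels: sensitivity, specificity and PPV.
import pandas as pd

CARDIO_TERMS = ("chest pain", "palpitations", "syncope")   # illustrative rule

def flag_cardio(complaint: str) -> bool:
    return any(term in complaint.lower() for term in CARDIO_TERMS)

df = pd.DataFrame({
    "chief_complaint": ["CHEST PAIN x2 days", "ankle sprain",
                        "Syncope episode", "cough"],
    "icd9_cardio": [True, False, True, False],             # toy gold labels
})
pred = df["chief_complaint"].map(flag_cardio)
truth = df["icd9_cardio"]

tp = (pred & truth).sum();  fp = (pred & ~truth).sum()
fn = (~pred & truth).sum(); tn = (~pred & ~truth).sum()
print(f"sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}  PPV={tp/(tp+fp):.2f}")
```

    In the final phase, a percent excess risk per 10 µg/m³ is typically derived from the PM2.5 coefficient β of a log-linear time-series model as 100·(e^(10β) − 1).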

    The Second Monocular Depth Estimation Challenge

    This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC). This edition was open to methods using any form of supervision, including fully-supervised, self-supervised, multi-task or proxy depth. The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground truth. This includes complex natural environments, e.g. forests or fields, which are greatly underrepresented in current benchmarks. The challenge received eight unique submissions that outperformed the provided SotA baseline on at least one of the pointcloud- or image-based metrics. The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised submission improved it by 16.61%. Supervised submissions generally leveraged large collections of datasets to improve data diversity. Self-supervised submissions instead updated the network architecture and pretrained backbones. These results represent significant progress in the field, while highlighting avenues for future research, such as reducing interpolation artifacts at depth boundaries, improving self-supervised indoor performance and overall natural-image accuracy. (Comment: Published at CVPRW 2023)
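
    The quoted relative F-Score gains are percentage improvements over the baseline score; the one-liner below makes the arithmetic explicit (the F-Score values shown are placeholders, not the actual leaderboard numbers).

```python
# Relative improvement of a submission's F-Score over the baseline, in percent.
def relative_improvement(baseline_f: float, submission_f: float) -> float:
    return (submission_f - baseline_f) / baseline_f * 100.0

print(f"{relative_improvement(0.150, 0.1914):+.2f}%")  # e.g. +27.60% over baseline
```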